Scaled, Inexact, and Adaptive Generalized FISTA for Strongly Convex Optimization

Authors

Abstract

We consider a variable metric and inexact version of the fast iterative soft-thresholding algorithm (FISTA) of the type considered in [L. Calatroni and A. Chambolle, SIAM J. Optim., 29 (2019), pp. 1772--1798; A. Chambolle and T. Pock, Acta Numer., 25 (2016), pp. 161--319] for the minimization of the sum of two (possibly strongly) convex functions. The proposed algorithm is combined with an adaptive (nonmonotone) backtracking strategy, which allows for the adjustment of the algorithmic step-size along the iterations in order to improve the convergence speed. We prove a linear convergence result for the function values, which depends on both the strong convexity moduli of the two functions and the upper and lower bounds on the spectrum of the variable metric operators. We validate the proposed algorithm, named Scaled Adaptive GEneralized FISTA (SAGE-FISTA), on exemplar image denoising and deblurring problems where edge-preserving total variation (TV) regularization is combined with Kullback--Leibler-type fidelity terms, as is common in applications where signal-dependent Poisson noise is assumed in the data.
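
For readers who want a concrete picture, the sketch below shows a generic FISTA-type iteration for a strongly convex composite problem with step-size backtracking. It is a minimal illustration, not the authors' SAGE-FISTA: the variable metric is replaced by the identity, the backtracking is monotone rather than nonmonotone, and Nesterov's constant-momentum rule for strongly convex problems stands in for the paper's inertial sequence. All names are ours.

```python
# Minimal sketch of a FISTA-type method for F(x) = f(x) + g(x), with
# f smooth and mu-strongly convex and g = lam*||.||_1. NOT the paper's
# SAGE-FISTA: identity metric, monotone backtracking, constant momentum.

import numpy as np

def prox_l1(x, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_strongly_convex(A, b, lam=0.1, mu=1e-2, tau=1.0, max_iter=500):
    """Minimize 0.5*||Ax-b||^2 + 0.5*mu*||x||^2 + lam*||x||_1."""
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + 0.5 * mu * (z @ z)
    grad = lambda z: A.T @ (A @ z - b) + mu * z
    x = y = np.zeros(A.shape[1])
    for _ in range(max_iter):
        fy, gy = f(y), grad(y)
        # Backtracking: halve tau until the quadratic upper bound holds.
        while True:
            x_new = prox_l1(y - tau * gy, tau * lam)
            d = x_new - y
            if f(x_new) <= fy + gy @ d + (0.5 / tau) * (d @ d):
                break
            tau *= 0.5
        # Constant momentum for a mu-strongly convex smooth part.
        q = min(mu * tau, 1.0)
        beta = (1.0 - np.sqrt(q)) / (1.0 + np.sqrt(q))
        y = x_new + beta * (x_new - x)
        x = x_new
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
print("nonzeros:", np.count_nonzero(fista_strongly_convex(A, b)))
```

In the paper's setting, the Euclidean prox above would be replaced by a prox in a weighted (variable-metric) norm, and the nonmonotone rule also allows the step size to increase between iterations rather than only shrink.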

Similar articles

Convergence Analysis of ISTA and FISTA for “Strongly + Semi” Convex Programming

The iterative shrinkage/thresholding algorithm (ISTA) and its faster version FISTA have been widely used in the literature. In this paper, we consider general versions of ISTA and FISTA in the more general “strongly + semi” convex setting, i.e., minimizing the sum of a strongly convex function and a semiconvex function, and conduct convergence analysis for them. The consideration of a semic...
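
For context, "semiconvex" usually means convex up to a quadratic. The following note (our wording, not from the paper) records why a "strongly + semi" sum can retain convexity:

```latex
% Our wording, not the paper's: $g$ is $\rho$-semiconvex if
% $g + \tfrac{\rho}{2}\|\cdot\|^2$ is convex. If $f$ is $\mu$-strongly
% convex with $\mu \ge \rho$, then
\[
  f + g
  = \underbrace{\left(f - \tfrac{\rho}{2}\|\cdot\|^2\right)}_{(\mu-\rho)\text{-strongly convex}}
  + \underbrace{\left(g + \tfrac{\rho}{2}\|\cdot\|^2\right)}_{\text{convex}},
\]
% so the sum is still $(\mu-\rho)$-strongly convex (merely convex when $\mu=\rho$).
```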

Adaptive FISTA

In this paper we propose an adaptively extrapolated proximal gradient method, which is based on the accelerated proximal gradient method (also known as FISTA); however, we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations, the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quas...
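
A crude stand-in for the idea on a lasso-type problem, assuming f(x) = 0.5*||Ax-b||^2: try a grid of extrapolation values and keep the one giving the lowest objective after a forward-backward step. The paper's exact/inexact line search and its SR1 proximal quasi-Newton connection are not reproduced; names are ours.

```python
# Grid search over the extrapolation parameter instead of the fixed
# FISTA schedule; a rough illustration of "locally optimizing" beta.

import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adaptive_extrapolation_sketch(A, b, lam=0.1, max_iter=300):
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step = 1/L for the quadratic f
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    grad = lambda z: A.T @ (A @ z - b)
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        d = x - x_prev
        best, best_val = None, np.inf
        for beta in np.linspace(0.0, 1.0, 11):  # candidate extrapolations
            y = x + beta * d
            cand = prox_l1(y - tau * grad(y), tau * lam)
            val = f(cand) + lam * np.abs(cand).sum()
            if val < best_val:
                best, best_val = cand, val
        x_prev, x = x, best
    return x
```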

Distributed Optimization for Non-Strongly Convex Regularizers

We develop primal-dual algorithms for distributed training of linear models in the Spark framework. We present the ProxCoCoA+ method, which represents a generalization of the CoCoA+ algorithm and extends it to the case of general non-strongly convex regularizers. A primal-dual convergence rate analysis is provided along with an experimental evaluation of the algorithm on the problem of elastic net r...
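
As background on the elastic-net experiment mentioned above, the regularizer lam*(alpha*||x||_1 + 0.5*(1-alpha)*||x||^2) has a closed-form proximal operator: soft-threshold, then shrink. A minimal sketch, unrelated to the ProxCoCoA+ codebase:

```python
# Closed-form prox of the elastic-net regularizer (generic background):
# soft-threshold at t*lam*alpha, then divide by 1 + t*lam*(1-alpha).

import numpy as np

def prox_elastic_net(v, t, lam=1.0, alpha=0.5):
    soft = np.sign(v) * np.maximum(np.abs(v) - t * lam * alpha, 0.0)
    return soft / (1.0 + t * lam * (1.0 - alpha))

print(prox_elastic_net(np.array([2.0, -0.3, 0.8]), t=1.0))
```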

First-order methods with inexact oracle: the strongly convex case

The goal of this paper is to study the effect of inexact first-order information on the first-order methods designed for smooth strongly convex optimization problems. It can be seen as a generalization to the strongly convex case of our previous paper [1]. We introduce the notion of (δ,L,μ)-oracle, that can be seen as an extension of the (δ,L)-oracle (previously introduced in [1]), taking into ...
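
For reference, the (δ,L,μ)-oracle can be written as follows; this is our hedged reconstruction in the standard Devolder–Glineur–Nesterov notation, not text from the paper:

```latex
% A $(\delta,L,\mu)$-oracle returns, at any query point $y$, a pair
% $(f_\delta(y), g_\delta(y))$ such that for all $x$
\[
  \frac{\mu}{2}\,\|x-y\|^2
  \;\le\; f(x) - f_\delta(y) - \langle g_\delta(y),\, x - y \rangle
  \;\le\; \frac{L}{2}\,\|x-y\|^2 + \delta,
\]
% so $\delta = 0$ recovers the exact first-order oracle of an $L$-smooth,
% $\mu$-strongly convex $f$.
```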

Inexact Alternating Direction Methods of Multipliers for Separable Convex Optimization

Inexact alternating direction multiplier methods (ADMMs) are developed for solving general separable convex optimization problems with a linear constraint and with an objective that is the sum of smooth and nonsmooth terms. The approach involves linearized subproblems, a back substitution step, and either gradient or accelerated gradient techniques. Global convergence is established. ...
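
To fix ideas, here is a textbook ADMM on the splitting min f(x) + g(z) s.t. x = z, with a quadratic f and an l1 term. It illustrates only the alternating structure the abstract refers to, not the paper's linearized subproblems, back substitution step, or accelerated variants.

```python
# Plain ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1 s.t. x = z
# (scaled dual variable u). Generic background, not the paper's method.

import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, max_iter=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Cache the Cholesky factor used by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(max_iter):
        rhs = A.T @ b + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update (quadratic)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # z-update (prox)
        u = u + x - z                                        # dual update
    return z
```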

Journal

Journal title: SIAM Journal on Optimization

Year: 2022

ISSN: 1095-7189, 1052-6234

DOI: https://doi.org/10.1137/21m1391699